Hyper-Sparse Optimal Aggregation

Authors

  • Stéphane Gaïffas
  • Guillaume Lecué
Abstract

Given a finite set F of functions and a learning sample, the aim of an aggregation procedure is to achieve a risk as close as possible to the risk of the best function in F. Up to now, optimal aggregation procedures have been convex combinations of all the elements of F. In this paper, we prove that optimal aggregation procedures combining only two functions in F exist. Such algorithms are of particular interest when F contains many irrelevant functions that should not appear in the aggregate. Since selectors (procedures that pick a single element of F) are suboptimal aggregation procedures, this proves that two is the minimal number of elements of F required for the construction of an optimal aggregation procedure in every situation. We then perform a numerical study of the problem of selecting the regularization parameters of the Lasso and Elastic-net estimators. On simulated examples, we compare our aggregation algorithms to aggregation with exponential weights, to Mallows' Cp, and to cross-validation selection procedures.
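To make the two-function idea concrete, here is a minimal sketch, assuming squared loss and a held-out validation sample. It follows a star-type scheme (first take the empirical risk minimizer over F, then minimize the empirical risk over all segments joining it to other elements of F); this is an illustration in the spirit of the abstract, not the authors' exact procedure, and the function name and array conventions are ours.

```python
import numpy as np

def two_point_aggregate(F_preds, y):
    """Illustrative two-function aggregation under squared loss.

    F_preds : (M, n) array with the predictions of the M candidate
              functions on a validation sample; y : (n,) responses.
    Returns (j, k, t, risk) such that t * F_preds[j] + (1 - t) * F_preds[k]
    attains the smallest empirical risk found: a convex combination of
    at most two elements of F.
    """
    risks = np.mean((F_preds - y) ** 2, axis=1)
    j = int(np.argmin(risks))            # step 1: empirical risk minimizer over F
    best = (j, j, 1.0, float(risks[j]))
    fj = F_preds[j]
    for k in range(F_preds.shape[0]):    # step 2: best point on each segment [f_j, f_k]
        fk = F_preds[k]
        d = fj - fk
        denom = np.mean(d ** 2)
        if denom == 0.0:
            continue
        # closed-form minimizer of t -> mean((t * fj + (1 - t) * fk - y)^2),
        # clipped to [0, 1] so the result stays a convex combination
        t = float(np.clip(np.mean(d * (y - fk)) / denom, 0.0, 1.0))
        r = float(np.mean((t * fj + (1 - t) * fk - y) ** 2))
        if r < best[3]:
            best = (j, k, t, r)
    return best
```

In the Lasso/Elastic-net experiment described above, each row of F_preds would hold the held-out predictions of one candidate fit (one per value of the regularization parameter), so the final aggregate touches at most two candidates: the hyper-sparsity of the title.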


Related articles

Optimal Aggregation of Classifiers and Boosting Maps in Functional Magnetic Resonance Imaging

We study a method of optimal data-driven aggregation of classifiers in a convex combination and establish tight upper bounds on its excess risk with respect to a convex loss function, under the assumption that the solution of the optimal aggregation problem is sparse. We use a boosting-type algorithm of optimal aggregation to develop aggregate classifiers of activation patterns in fMRI based on loca...
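For reference, the convex-aggregation step sketched above can be written schematically as empirical risk minimization over the simplex; this is a generic formulation with loss ℓ and M base classifiers, not a claim about this paper's exact estimator:

$$\hat\lambda \in \operatorname*{arg\,min}_{\lambda \in \Lambda_M} \frac{1}{n} \sum_{i=1}^{n} \ell\Big(y_i,\ \sum_{j=1}^{M} \lambda_j f_j(x_i)\Big), \qquad \Lambda_M = \Big\{\lambda \in \mathbb{R}_+^M : \sum_{j=1}^{M} \lambda_j = 1\Big\}.$$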


Deviation Optimal Learning using Greedy Q-aggregation

Given a finite family of functions, the goal of model selection aggregation is to construct a procedure that mimics the function from this family that is the closest to an unknown regression function. More precisely, we consider a general regression model with fixed design and measure the distance between functions by the mean squared error at the design points. While procedures based on expone...
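For concreteness, the model-selection aggregation target in this fixed-design setting is commonly stated as follows, with f* the unknown regression function, ‖·‖_n the empirical norm at the design points, M = |F|, and C an absolute constant; the log(M)/n remainder is the optimal rate for this problem:

$$\mathbb{E}\,\|\hat f - f^{*}\|_n^2 \;\le\; \min_{f \in F} \|f - f^{*}\|_n^2 \;+\; C\,\frac{\log M}{n}, \qquad \|g\|_n^2 = \frac{1}{n}\sum_{i=1}^{n} g(x_i)^2.$$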


Adaptive Minimax Estimation over Sparse ℓq-Hulls

Given a dictionary of Mn initial estimates of the unknown true regression function, we aim to construct linearly aggregated estimators that target the best performance among all the linear combinations under a sparse ℓq-norm (0 ≤ q ≤ 1) constraint on the linear coefficients. Besides identifying the optimal rates of aggregation for these ℓq-aggregation problems, our multi-directional (or adaptive...
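Schematically, the ℓq-constrained aggregates considered here (and in the random-design companion entry further below) take the following form, with f_1, …, f_{M_n} the dictionary elements and t_n > 0 the radius of the ℓq-ball; for q = 0 the constraint is read as a bound on the number of nonzero coefficients:

$$\hat f_\theta = \sum_{j=1}^{M_n} \theta_j f_j, \qquad \sum_{j=1}^{M_n} |\theta_j|^{q} \le t_n^{q} \quad (0 < q \le 1).$$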


Exploratory Sparse Models for Face Classification

In this paper, a class of sparse regularization methods is considered for developing and exploring sparse classifiers for face recognition. The sparse classification method aims to both select the most important features and maximize the classification margin, in a manner similar to support vector machines. An efficient process for directly calculating the complete set of optimal, sparse class...
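One standard concrete instance of the sparse, margin-maximizing objectives alluded to above is the ℓ1-regularized hinge loss (the 1-norm SVM), shown here as a generic illustration rather than this paper's exact formulation:

$$\min_{w \in \mathbb{R}^{p},\, b \in \mathbb{R}} \; \|w\|_1 + C \sum_{i=1}^{n} \max\big(0,\ 1 - y_i\,(w^{\top} x_i + b)\big).$$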


Adaptive minimax regression estimation over sparse ℓq-hulls

Given a dictionary of Mn predictors, in a random design regression setting with n observations, we construct estimators that target the best performance among all the linear combinations of the predictors under a sparse ℓq-norm (0 ≤ q ≤ 1) constraint on the linear coefficients. Besides identifying the optimal rates of convergence, our universal aggregation strategies by model mixing achieve the...



Journal:
  • Journal of Machine Learning Research

Volume 12, Issue: -

Pages: -

Publication date: 2011